# RoBERTa architecture optimization

## RobBERT-2023 Dutch Base
- License: MIT
- Description: RobBERT-2023 is the latest Dutch language model. It adopts the RoBERTa architecture and uses an updated Dutch tokenizer and training data to keep pace with recent developments and changes in Dutch.
- Tags: Large Language Model, Transformers, Other
- Organization: DTAI-KULeuven
- Downloads: 339 · Likes: 7
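As a RoBERTa-style masked language model, RobBERT-2023 can be queried with the Transformers fill-mask pipeline. A minimal sketch, assuming the Hub ID `DTAI-KULeuven/robbert-2023-dutch-base` (inferred from the organization and model name above) and the RoBERTa `<mask>` token:

```python
from transformers import pipeline

# Hub ID below is an assumption based on the organization
# and model name listed in this entry.
fill_mask = pipeline("fill-mask", model="DTAI-KULeuven/robbert-2023-dutch-base")

# RoBERTa-style models use "<mask>" as the mask token.
# Dutch: "There is a <mask> in my garden."
for pred in fill_mask("Er staat een <mask> in mijn tuin."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```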
## RoBERTa Psych
- Description: Pre-trained model based on the RoBERTa architecture, optimized specifically for psychological counseling and therapy dialogue texts.
- Tags: Large Language Model, Transformers
- Organization: mlaricheva
- Downloads: 13 · Likes: 2
## BERTweet.BR
- License: Apache-2.0
- Description: BERTweet.BR is a language model pre-trained specifically for Portuguese tweets; it is based on the RoBERTa architecture and was trained on 100 million Portuguese tweets.
- Tags: Large Language Model, Transformers, Other
- Organization: melll-uff
- Downloads: 339 · Likes: 10
## BSC Bio EHR ES
- License: Apache-2.0
- Description: Pre-trained language model optimized for Spanish biomedical and clinical (EHR) texts, supporting clinical NLP tasks.
- Tags: Large Language Model, Transformers, Spanish
- Organization: PlanTL-GOB-ES
- Downloads: 624 · Likes: 12
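For clinical NLP tasks, an encoder like this is typically used to produce contextual embeddings for downstream models. A minimal sketch, assuming the Hub ID `PlanTL-GOB-ES/bsc-bio-ehr-es` (inferred from this entry) and an illustrative clinical sentence:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Hub ID assumed from the organization and model name above.
model_id = "PlanTL-GOB-ES/bsc-bio-ehr-es"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Spanish clinical sentence: "The patient presents a fever of 39 degrees."
text = "El paciente presenta fiebre de 39 grados."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Token-level contextual embeddings, e.g. as input to a clinical tagger.
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)
```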
## RoBERTuito Sentiment Analysis
- Description: Spanish tweet sentiment analysis model based on RoBERTuito; performs three-class sentiment classification with POS (positive), NEG (negative), and NEU (neutral) labels.
- Tags: Text Classification, Spanish
- Organization: pysentimiento
- Downloads: 1.0M · Likes: 88
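The three-class output maps directly onto the Transformers text-classification pipeline. A minimal sketch using the Hub ID `pysentimiento/robertuito-sentiment-analysis`; note that the pysentimiento project also provides its own tweet preprocessing (user handles, URLs), which this sketch omits:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="pysentimiento/robertuito-sentiment-analysis",
)

# Spanish tweets; predicted labels are POS, NEG, or NEU.
tweets = [
    "¡Qué buen día!",        # "What a nice day!"
    "Odio esperar el bus.",  # "I hate waiting for the bus."
]
for tweet, result in zip(tweets, classifier(tweets)):
    print(f"{result['label']} ({result['score']:.2f}): {tweet}")
```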
## MagBERT-NER
- License: Apache-2.0
- Description: MagBERT-NER is a named entity recognition model designed specifically for the French of Morocco (Maghreb region).
- Tags: Sequence Labeling, Transformers, French
- Organization: TypicaAI
- Downloads: 37 · Likes: 4
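Sequence-labeling models like this plug into the token-classification pipeline, which can merge sub-word predictions into entity spans. A minimal sketch, assuming the Hub ID `TypicaAI/magbert-ner` (inferred from this entry); the example sentence is illustrative and the label set is whatever the model was trained with:

```python
from transformers import pipeline

# "ner" is an alias of the token-classification pipeline;
# aggregation_strategy="simple" merges sub-word tokens into entity spans.
ner = pipeline(
    "ner",
    model="TypicaAI/magbert-ner",
    aggregation_strategy="simple",
)

# French: "Hicham works at TypicaAI in Casablanca."
for entity in ner("Hicham travaille chez TypicaAI à Casablanca."):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```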
## PaloBERT Base Greek Uncased V1
- Description: Greek language model based on RoBERTa, designed specifically for Greek social media text.
- Tags: Large Language Model, Transformers, Other
- Organization: gealexandri
- Downloads: 20 · Likes: 4
## WangchanBERTa Base Att Spm Uncased
- Description: A RoBERTa-base model pre-trained on 78.5 GB of Thai text, suitable for masked language modeling and text classification tasks.
- Tags: Large Language Model, Transformers, Other
- Organization: airesearch
- Downloads: 34.38k · Likes: 43
## RoBERTa Base CA
- License: Apache-2.0
- Description: Catalan pre-trained language model based on the RoBERTa architecture, developed under the Spanish government's language technology plan (PlanTL).
- Tags: Large Language Model, Transformers, Other
- Organization: PlanTL-GOB-ES
- Downloads: 15.56k · Likes: 5
## QuBERTa
- Description: Pre-trained model based on the RoBERTa architecture for Southern Quechua (Collao and Chanka dialects), trained on 5 million tokens.
- Tags: Large Language Model, Transformers, Other
- Organization: Llamacha
- Downloads: 289 · Likes: 1